High-low dimensional feature guided real-time semantic segmentation network
Zixing YU, Shaojun QU, Xin HE, Zhuo WANG
Journal of Computer Applications    2023, 43 (10): 3077-3085.   DOI: 10.11772/j.issn.1001-9081.2022091438

Most semantic segmentation networks use bilinear interpolation to restore the high-level feature map to the resolution of the low-level feature map before fusing them. This leaves part of the high-level semantic information spatially misaligned with the low-level feature map, so semantic information is lost. To address this problem, a High-Low dimensional Feature Guided real-time semantic segmentation Network (HLFGNet) was proposed on the basis of the Bilateral Segmentation Network (BiSeNet). First, a High-Low dimensional Feature Guided Module (HLFGM) was proposed to use the spatial position information of the low-level feature map to guide the displacement of high-level semantic information during upsampling; at the same time, strong feature representations obtained from the high-level feature maps were combined with an attention mechanism to suppress redundant edge details in the low-level feature map and reduce pixel misclassification. Then, an improved Pyramid Pooling Guided Module (PPGM) was introduced to obtain global contextual information and strengthen the effective fusion of local contextual information at different scales. Experimental results show that HLFGNet achieves a mean Intersection over Union (mIoU) of 76.67% on the Cityscapes validation set and 70.90% on the CamVid test set, at 75.0 and 96.2 frames per second respectively; compared with BiSeNet, its mIoU is higher by 1.76 and 3.40 percentage points respectively. HLFGNet can therefore recognize scene information accurately while meeting real-time requirements.
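The abstract's core idea, using low-level spatial cues to shift high-level features during upsampling, can be illustrated with a flow-based alignment sketch. This is not the paper's HLFGM implementation; it is a minimal PyTorch illustration assuming a learned 2-channel offset field and `grid_sample` warping, with all module and channel names chosen here for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureGuidedUpsample(nn.Module):
    """Hypothetical sketch: predict per-pixel offsets from the concatenated
    low- and (upsampled) high-level features, then warp the high-level map
    so its semantics align spatially with the low-level map."""
    def __init__(self, low_ch, high_ch, mid_ch=64):
        super().__init__()
        self.down_low = nn.Conv2d(low_ch, mid_ch, kernel_size=1)
        self.down_high = nn.Conv2d(high_ch, mid_ch, kernel_size=1)
        self.flow = nn.Conv2d(2 * mid_ch, 2, kernel_size=3, padding=1)

    def forward(self, low, high):
        n, _, h, w = low.shape
        # plain bilinear upsampling first (the step the paper improves on)
        high_up = F.interpolate(high, size=(h, w), mode='bilinear',
                                align_corners=False)
        feat = torch.cat([self.down_low(low), self.down_high(high_up)], dim=1)
        flow = self.flow(feat)  # (n, 2, h, w): per-pixel (dx, dy) offsets
        # build a sampling grid displaced by the predicted flow
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
        grid = torch.stack((xs, ys), dim=-1).float().to(low.device)  # (h, w, 2)
        grid = grid.unsqueeze(0) + flow.permute(0, 2, 3, 1)          # (n, h, w, 2)
        # normalize pixel coordinates to [-1, 1] as grid_sample expects
        grid[..., 0] = 2.0 * grid[..., 0] / max(w - 1, 1) - 1.0
        grid[..., 1] = 2.0 * grid[..., 1] / max(h - 1, 1) - 1.0
        return F.grid_sample(high_up, grid, mode='bilinear',
                             align_corners=True)
```

Warping with a learned offset field lets each high-level pixel be sampled from a slightly shifted location, so boundaries in the upsampled semantics can snap to the low-level detail instead of staying at bilinear-interpolated positions.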

Traffic sign recognition model in haze weather based on YOLOv5
Jinghan YIN, Shaojun QU, Zekai YAO, Xuanye HU, Xiaoyu QIN, Pujing HUA
Journal of Computer Applications    2022, 42 (9): 2876-2884.   DOI: 10.11772/j.issn.1001-9081.2021071305

To address the poor recognition precision and frequent missed detections of small traffic signs in bad weather such as haze, rain and snow, a traffic sign recognition model for haze weather based on YOLOv5 (You Only Look Once version 5) was proposed. First, the structure of YOLOv5 was optimized: taking a contrary approach, the difficulty of recognizing small objects was relieved by reducing the depth of the feature pyramid and limiting the maximum downsampling factor, and repeated overlap of background features was suppressed by adjusting the depth of the residual modules. Second, mechanisms such as data augmentation, K-means anchor clustering and Global Non-Maximum Suppression (GNMS) were introduced into the model. Finally, the detection ability of the improved YOLOv5 in bad weather was verified on the Chinese traffic sign dataset TT100K, focusing on haze, the condition with the most obvious precision drop. Experimental results show that the improved YOLOv5 reaches an F1-score of 0.921 50, a mean Average Precision mAP@0.5 of 95.3% and a mAP@0.5:0.95 of 75.2%. The proposed model maintains high-precision recognition of traffic signs in bad weather and runs at up to 50 Frames Per Second (FPS), meeting the requirement of real-time detection.
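Of the mechanisms listed above, K-means anchor clustering is a well-established YOLO technique and can be sketched independently of the paper. The sketch below assumes the standard 1 - IoU distance over (width, height) pairs, with rectangles treated as corner-aligned; function and parameter names are chosen for this example, not taken from the paper.

```python
import numpy as np

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster ground-truth box sizes into k anchor sizes using K-means
    with IoU-based assignment (the usual YOLO anchor-fitting recipe).
    boxes: (N, 2) array of (width, height) pairs."""
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # IoU between every box and every center, with both rectangles
        # anchored at the same corner (only sizes matter here)
        inter = (np.minimum(boxes[:, None, 0], centers[None, :, 0]) *
                 np.minimum(boxes[:, None, 1], centers[None, :, 1]))
        union = (boxes[:, 0:1] * boxes[:, 1:2] +
                 centers[:, 0] * centers[:, 1] - inter)
        assign = np.argmax(inter / union, axis=1)  # nearest = highest IoU
        new_centers = np.array([
            boxes[assign == i].mean(axis=0) if np.any(assign == i)
            else centers[i]                       # keep empty clusters as-is
            for i in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers[np.argsort(centers.prod(axis=1))]  # sort by anchor area
```

Fitting anchors to the dataset's actual size distribution matters most for small objects like distant traffic signs, since default COCO anchors skew larger than typical TT100K sign boxes.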
